

Search for: All records

Creators/Authors contains: "Witharana, Chandi"


  1. This paper assesses trending AI foundation models, especially emerging computer vision foundation models, and their performance in natural landscape feature segmentation. While the term foundation model has quickly garnered interest from the geospatial domain, its definition remains vague. Hence, this paper first introduces AI foundation models and their defining characteristics. Building on the tremendous success achieved by Large Language Models (LLMs) as foundation models for language tasks, the paper discusses the challenges of building foundation models for geospatial artificial intelligence (GeoAI) vision tasks. To evaluate the performance of large AI vision models, especially Meta’s Segment Anything Model (SAM), we implemented different instance segmentation pipelines that minimize the changes to SAM in order to leverage its power as a foundation model. A series of prompt strategies was developed to test SAM’s performance with respect to its theoretical upper bound of predictive accuracy, its zero-shot performance, and its domain adaptability through fine-tuning. The analysis used two permafrost feature datasets, ice-wedge polygons and retrogressive thaw slumps, because (1) these landform features are more challenging to segment than man-made features due to their complicated formation mechanisms, diverse forms, and vague boundaries, and (2) their presence and changes are important indicators of Arctic warming and climate change. The results show that, although promising, SAM still has room for improvement to support AI-augmented terrain mapping. The spatial and domain generalizability of this finding is further validated using a more general dataset, EuroCrops, for agricultural field mapping. Finally, we discuss future research directions that could strengthen SAM’s applicability in challenging geospatial domains.

     
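
     As a rough illustration of the kind of prompt-driven pipeline described above, the sketch below feeds a bounding-box prompt to SAM through Meta's segment-anything package; the checkpoint path, image file, and box coordinates are placeholder assumptions, not the study's actual configuration.

```python
# Minimal sketch of prompting SAM with a bounding-box prompt (segment-anything package).
# The checkpoint path, image file, and box coordinates are illustrative placeholders.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")  # pretrained ViT-H weights
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("tile.png"), cv2.COLOR_BGR2RGB)  # RGB image tile
predictor.set_image(image)

# A box prompt around a candidate ice-wedge polygon or thaw slump (x0, y0, x1, y1).
box = np.array([120, 80, 340, 290])
masks, scores, _ = predictor.predict(box=box, multimask_output=True)
best_mask = masks[int(np.argmax(scores))]  # keep the highest-scoring mask proposal
```
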
  2. Plot-level photography is an attractive, time-saving alternative to field measurements for vegetation monitoring. However, widespread adoption of this technique relies on efficient workflows for post-processing images and on the accuracy of the resulting products. Here, we estimated relative vegetation cover using both traditional field sampling (point frame) and semi-automated classification of photographs (plot-level photography) across thirty 1 m² plots near Utqiaġvik, Alaska, from 2012 to 2021. Geographic object-based image analysis (GEOBIA) was applied to generate objects based on the three spectral bands (red, green, and blue) of the images. Five machine learning algorithms were then applied to classify the objects into vegetation groups, with random forest performing best (60.5% overall accuracy). Objects were reliably classified into the following classes: bryophytes, forbs, graminoids, litter, shadows, and standing dead. Deciduous shrubs and lichens were not reliably classified. Multinomial regression models were used to gauge whether the cover estimates from plot-level photography could accurately predict the cover estimates from the point frame across space or time. Plot-level photography yielded useful estimates of vegetation cover for graminoids. However, predictive performance varied both by vegetation class and by whether the models were used to predict cover in new locations or change over time in previously sampled plots. These results suggest that plot-level photography may maximize the efficient use of time, funding, and available technology to monitor vegetation cover in the Arctic, but the accuracy of current semi-automated image analysis is not sufficient to detect small changes in cover.
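
     A minimal sketch of the object-based workflow described above, assuming SLIC superpixels (scikit-image) stand in for the GEOBIA segmentation and per-object color statistics feed a random forest; the file name, parameters, and placeholder labels are illustrative only.

```python
# Illustrative object-based classification: segment an RGB plot photo into objects,
# summarize each object's color, and classify the objects with a random forest.
import numpy as np
from skimage import io, segmentation
from sklearn.ensemble import RandomForestClassifier

image = io.imread("plot_photo.jpg")                     # RGB plot photograph
objects = segmentation.slic(image, n_segments=500, compactness=10, start_label=1)

def object_features(img, labels):
    """Mean and standard deviation of R, G, B for each image object."""
    feats = []
    for obj_id in np.unique(labels):
        pixels = img[labels == obj_id].reshape(-1, 3)
        feats.append(np.hstack([pixels.mean(axis=0), pixels.std(axis=0)]))
    return np.vstack(feats)

X = object_features(image, objects)
# y would come from photo-interpreted labels (bryophyte, forb, graminoid, litter, ...);
# random integers stand in for them in this sketch.
y = np.random.randint(0, 6, size=X.shape[0])
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
predicted_classes = clf.predict(X)
```
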
  3. Rapid global warming is catalyzing widespread permafrost degradation in the Arctic, leading to destructive land-surface subsidence that destabilizes and deforms the ground. Consequently, human-built infrastructure constructed upon permafrost is currently at major risk of structural failure. Risk assessment frameworks that attempt to study this issue assume that precise information on the location and extent of infrastructure is known. However, complete, high-quality, uniform geospatial datasets of built infrastructure that are readily available for such scientific studies are lacking. While imagery-enabled mapping can fill this knowledge gap, the small size of individual structures and the vast geographical extent of the Arctic necessitate large volumes of very high spatial resolution remote sensing imagery. Transforming this ‘big’ imagery data into ‘science-ready’ information demands highly automated image analysis pipelines driven by advanced computer vision algorithms. Despite this, previous fine-resolution studies have been limited to manual digitization of features at locally confined scales. This exploratory study therefore serves as the first investigation into fully automated analysis of sub-meter spatial resolution satellite imagery for detection of Arctic built infrastructure. We tasked the U-Net, a deep learning-based semantic segmentation model, with classifying different infrastructure types (residential, commercial, public, and industrial buildings, as well as roads) from commercial satellite imagery of Utqiagvik and Prudhoe Bay, Alaska. We also conducted a systematic experiment to understand how image augmentation affects model performance when labeled training data are limited. When optimal augmentation methods were applied, the U-Net achieved an average F1 score of 0.83. Overall, our experimental findings show that the U-Net-based workflow is a promising method for automated detection of Arctic built infrastructure that, combined with existing optimized workflows such as MAPLE, could be expanded to map a multitude of infrastructure types spanning the pan-Arctic.

     
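
     The class-averaged F1 score reported above can be computed from predicted and reference label rasters as sketched below; the class list follows the abstract, while the label arrays are random placeholders.

```python
# Sketch of computing a class-averaged F1 score for semantic segmentation output.
# y_true / y_pred would be flattened label rasters; random placeholders are used here.
import numpy as np
from sklearn.metrics import f1_score

classes = ["residential", "commercial", "public", "industrial", "road", "background"]
y_true = np.random.randint(0, len(classes), size=10_000)   # reference labels per pixel
y_pred = np.random.randint(0, len(classes), size=10_000)   # U-Net predictions per pixel

per_class_f1 = f1_score(y_true, y_pred, average=None, labels=range(len(classes)))
macro_f1 = f1_score(y_true, y_pred, average="macro")
for name, score in zip(classes, per_class_f1):
    print(f"{name:12s} F1 = {score:.3f}")
print(f"average F1 = {macro_f1:.3f}")
```
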
  4. Retrogressive thaw slumps (RTS) are considered one of the most dynamic permafrost disturbance features in the Arctic. Sub-meter resolution multispectral imagery acquired by very high spatial resolution (VHSR) commercial satellite sensors offers unique capabilities for capturing the morphological dynamics of RTSs. The central goal of this study is to develop a deep learning convolutional neural network (CNN) model (a UNet-based workflow) to automatically detect and characterize RTSs from VHSR imagery. We aimed to understand (1) the optimal combination of input image tile size (array size) and CNN network input size (resizing factor/spatial resolution) and (2) the interoperability of trained UNet models across heterogeneous study sites given a limited set of training samples. Hand annotation of RTS samples, CNN model training and testing, and interoperability analyses were based on two study areas in the high Arctic of Canada: (1) Banks Island and (2) Axel Heiberg Island and Ellesmere Island. Our experimental results revealed the potential impact of image tile size and resizing factor on the detection accuracy of the UNet model. The results of the model transferability analysis elucidate the effects on the UNet model of the variability (e.g., shape, color, and texture) associated with the RTS training samples. Overall, the study findings highlight several key factors to consider when operationalizing CNN-based RTS mapping over large geographical extents.
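
     The two parameters the study varies, the tile size cut from the scene and the network input size each tile is resized to, can be illustrated with a short helper like the one below; the scene array, tile size, and input size are placeholder values.

```python
# Sketch of the two knobs examined above: the image tile size (array size cut from the
# scene) and the CNN input size each tile is resized to. Sizes are placeholders.
import numpy as np
import cv2

def tile_and_resize(scene, tile_size=512, net_input_size=256):
    """Cut a (H, W, bands) array into square tiles and resize each to the network input size."""
    tiles = []
    h, w = scene.shape[:2]
    for row in range(0, h - tile_size + 1, tile_size):
        for col in range(0, w - tile_size + 1, tile_size):
            tile = scene[row:row + tile_size, col:col + tile_size]
            tiles.append(cv2.resize(tile, (net_input_size, net_input_size),
                                    interpolation=cv2.INTER_AREA))
    return np.stack(tiles)

scene = np.random.randint(0, 255, size=(2048, 2048, 3), dtype=np.uint8)  # stand-in VHSR chip
batch = tile_and_resize(scene, tile_size=512, net_input_size=256)
print(batch.shape)  # (16, 256, 256, 3)
```
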
  5. Commercial satellite sensors offer the luxury of mapping individual permafrost features and their change over time. Deep learning convolutional neural networks (CNNs) have demonstrated remarkable success in automated image analysis. The inferential strength of CNN models is driven primarily by the quality and volume of hand-labeled training samples, and producing hand-annotated samples is a daunting task. This is particularly true for regional-scale mapping applications, such as permafrost feature detection across the Arctic. Image augmentation is a strategic, data-space solution that synthetically inflates the size and quality of training samples by transforming the color space or geometric shape or by injecting noise. In this study, we systematically investigate the effectiveness of a spectrum of augmentation methods when applied to CNN algorithms for recognizing ice-wedge polygons from commercial satellite imagery. Our findings suggest that several augmentation methods (such as hue and saturation shifts and salt-and-pepper noise) can increase model performance.

     
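
     Two of the augmentations named above, a hue/saturation shift and salt-and-pepper noise, might be sketched as follows with torchvision and numpy; the jitter magnitudes, noise fraction, and file name are illustrative assumptions rather than the settings used in the study.

```python
# Two example augmentations: a hue/saturation jitter and salt-and-pepper noise.
# Magnitudes and the input file name are illustrative only.
import numpy as np
from PIL import Image
from torchvision import transforms

color_jitter = transforms.ColorJitter(hue=0.05, saturation=0.3)  # hue/saturation shift

def salt_and_pepper(img: Image.Image, amount: float = 0.01) -> Image.Image:
    """Flip a small random fraction of pixels to pure black or pure white."""
    arr = np.array(img)
    mask = np.random.rand(*arr.shape[:2])
    arr[mask < amount / 2] = 0          # pepper
    arr[mask > 1 - amount / 2] = 255    # salt
    return Image.fromarray(arr)

tile = Image.open("iwp_tile.png").convert("RGB")   # placeholder training chip
augmented = salt_and_pepper(color_jitter(tile))
```
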
  6. Data are available for download at http://arcticdata.io/data/10.18739/A2KW57K57. Permafrost can be indirectly detected via remote sensing through the presence of ice-wedge polygons, which are a ubiquitous ground-surface feature in tundra regions. Ice-wedge polygons form through repeated annual cracking of the ground during cold winter days. In spring, the cracks fill with snowmelt water, creating ice wedges, which are connected across the landscape in an underground network and can grow to several meters in depth and width. The growing ice wedges push the soil upwards, forming ridges that bound low-centered ice-wedge polygons. If the top of an ice wedge melts, the ground subsides, the ridges become troughs, and the ice-wedge polygons become high-centered. Here, a convolutional neural network is used to map the boundaries of individual ice-wedge polygons from high-resolution commercial satellite imagery obtained from the Polar Geospatial Center. The imagery used for detection was acquired between 2001 and 2021, so the dataset represents ice-wedge polygons mapped in different years; it does not include a time series (i.e., the same area mapped more than once). The shapefiles are masked, reprojected, and processed into GeoPackages with calculated attributes for each ice-wedge polygon, such as circumference and width. The GeoPackages are then rasterized with new calculated attributes for ice-wedge polygon coverage, such as coverage density. This release covers the region classified as “high ice” by Brown et al. 1997. The dataset is available to explore on the Permafrost Discovery Gateway (PDG), an online platform that aims to make big geospatial permafrost data accessible to enable knowledge generation by researchers and the public. The PDG project creates various pan-Arctic data products down to sub-meter and monthly resolution. Access the PDG Imagery Viewer here: https://arcticdata.io/catalog/portals/permafrost. Data limitations in use: these data are part of an initial release of the pan-Arctic ice-wedge polygon data product, and constraints on accuracy and completeness are expected. Users are encouraged to provide feedback on how they use the data and on issues they encounter during post-processing; please reach out to the dataset contact or a member of the PDG team via support@arcticdata.io.
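
     The vector post-processing step described above (reproject detected polygons, add per-polygon attributes, write a GeoPackage) could look roughly like the sketch below with geopandas; the file names, target CRS, and the simple width proxy are assumptions made for illustration.

```python
# Sketch of the vector post-processing described above: load detected ice-wedge polygons,
# reproject, add simple per-polygon attributes, and write a GeoPackage.
import geopandas as gpd

polygons = gpd.read_file("iwp_detections.shp")
polygons = polygons.to_crs("EPSG:3413")                 # assumed polar stereographic CRS

polygons["circumference_m"] = polygons.geometry.length
polygons["area_m2"] = polygons.geometry.area
# A simple width proxy: diameter of a circle with the same area as the polygon.
polygons["width_m"] = 2 * (polygons["area_m2"] / 3.141592653589793) ** 0.5

polygons.to_file("iwp_detections.gpkg", driver="GPKG", layer="ice_wedge_polygons")
```
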
  7. High-spatial-resolution satellite imagery enables transformational opportunities to observe, map, and document the micro-topographic transitions occurring in Arctic polygonal tundra at multiple spatial and temporal frequencies. Knowledge discovery through artificial intelligence, big imagery, and high-performance computing (HPC) resources is just starting to be realized in Arctic permafrost science. We have developed a novel high-performance image-analysis framework—Mapping Application for Arctic Permafrost Land Environment (MAPLE)—that enables the integration of operational-scale GeoAI capabilities into Arctic permafrost modeling. Interoperability across heterogeneous HPC systems and optimal usage of computational resources are key design goals of MAPLE. We systematically compared the performances of four different MAPLE workflow designs on two HPC systems. Our experimental results on resource utilization, total time to completion, and overhead of the candidate designs suggest that the design of an optimal workflow largely depends on the HPC system architecture and underlying service-unit accounting model. 
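
     A toy illustration of the service-unit accounting idea mentioned above: the same workflow design can cost different amounts of allocation on systems with different charge factors. All numbers below are invented for the example.

```python
# Toy service-unit (SU) accounting comparison for hypothetical workflow designs and systems.
# Node counts, wall-clock times, and charge factors are invented for illustration.
def service_units(nodes: int, wall_hours: float, charge_factor: float) -> float:
    """SUs charged = nodes x wall-clock hours x system-specific charge factor."""
    return nodes * wall_hours * charge_factor

workflows = {
    "design_A": {"nodes": 4, "wall_hours": 6.0},
    "design_B": {"nodes": 8, "wall_hours": 2.5},
}
systems = {"system_1": 1.0, "system_2": 1.5}   # hypothetical per-node-hour charge factors

for wf, cfg in workflows.items():
    for sys_name, factor in systems.items():
        cost = service_units(cfg["nodes"], cfg["wall_hours"], factor)
        print(f"{wf} on {sys_name}: {cost:.1f} SUs")
```
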
  8. Advanced deep learning methods combined with regional, open-access, airborne Light Detection and Ranging (LiDAR) data have great potential for studying the spatial extent of historic land use features preserved under the forest canopy throughout New England, a region in the northeastern United States. Mapping anthropogenic features plays a key role in understanding historic land use dynamics during the 17th to early 20th centuries; however, previous studies have primarily used manual or semi-automated digitization methods, which are time-consuming for broad-scale mapping. This study applies fully automated deep convolutional neural networks (i.e., U-Net) with LiDAR derivatives to identify relict charcoal hearths (RCHs), a type of historical land use feature. Results show that slope, hillshade, and Visualization for Archaeological Topography (VAT) rasters work well in six localized test regions (spatial scale: <1.5 km², best F1 score: 95.5%) and also at broader extents at the town level (spatial scale: 493 km², best F1 score: 86%). The model performed best in areas with deciduous forest and high-slope terrain (>15 degrees) (F1 score: 86.8%) compared to coniferous forest and low-slope terrain (<15 degrees) (F1 score: 70.1%). Overall, our results contribute to current methodological discussions regarding automated extraction of historical cultural features using deep learning and LiDAR.
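
     Two of the LiDAR derivatives named above, slope and hillshade, can be derived from a DEM array as sketched below with numpy; the cell size, sun azimuth, and sun altitude are illustrative values, and the aspect/azimuth conventions follow the common analytical hillshading formula.

```python
# Sketch of two LiDAR derivatives, slope and hillshade, computed from a 2-D elevation array.
# Cell size, sun azimuth, and sun altitude are illustrative values.
import numpy as np

def slope_and_hillshade(dem, cell_size=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Return slope (degrees) and a 0-255 hillshade from a 2-D elevation array."""
    dzdy, dzdx = np.gradient(dem, cell_size)
    slope_rad = np.arctan(np.hypot(dzdx, dzdy))
    aspect_rad = np.arctan2(-dzdx, dzdy)

    az = np.radians(360.0 - azimuth_deg + 90.0)
    alt = np.radians(altitude_deg)
    shaded = (np.sin(alt) * np.cos(slope_rad) +
              np.cos(alt) * np.sin(slope_rad) * np.cos(az - aspect_rad))
    return np.degrees(slope_rad), np.clip(255 * shaded, 0, 255).astype(np.uint8)

dem = np.random.rand(512, 512).astype(np.float32) * 50  # stand-in for a LiDAR DEM tile
slope_deg, hillshade = slope_and_hillshade(dem, cell_size=1.0)
```
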